frame problem
Exploring Syntropic Frameworks in AI Alignment: A Philosophical Investigation
The alignment problem--ensuring advanced AI systems act in accordance with human values--represents one of the most pressing philosophical and technical challenges of our time. As Bostrom (2014) and Russell (2019) have argued, the difficulty lies not merely in creating capable systems, but in ensuring these systems remain beneficial as their capabilities grow. Current approaches typically attempt to specify human values directly, whether through reward modeling, constitutional AI, or iterative refinement based on human feedback. Yet these content-based approaches face a fundamental philosophical problem: human values are contextual, often contradictory, and resist precise specification. The attempt to encode a complete value system encounters what I call the "specification trap": the more precisely we attempt to define our values, the more we realize their dependence on implicit knowledge, cultural context, and evolutionary history that cannot be fully articulated. This paper's central thesis is that alignment should be reconceived not as a problem of value specification but as one of process architecture: creating syntropic, reasons-responsive agents whose values emerge through embodied multi-agent interaction rather than being encoded through training. What follows is a framework and research program proposal rather than a report of completed empirical results. I defend this thesis through four interconnected arguments that support three central contributions. Part I diagnoses the specification trap that makes content-based approaches structurally unstable.
Evaluating Large Language Models on the Frame and Symbol Grounding Problems: A Zero-shot Benchmark
Recent advancements in large language models (LLMs) have revitalized philosophical debates surrounding artificial intelligence. Two of the most fundamental challenges - namely, the Frame Problem and the Symbol Grounding Problem - have historically been viewed as unsolvable within traditional symbolic AI systems. This study investigates whether modern LLMs possess the cognitive capacities required to address these problems. To do so, I designed two benchmark tasks reflecting the philosophical core of each problem, administered them under zero-shot conditions to 13 prominent LLMs (both closed and open-source), and assessed the quality of the models' outputs across five trials each. Responses were scored along multiple criteria, including contextual reasoning, semantic coherence, and information filtering. The results demonstrate that while open-source models showed variability in performance due to differences in model size, quantization, and instruction tuning, several closed models consistently achieved high scores. These findings suggest that select modern LLMs may be acquiring capacities sufficient to produce meaningful and stable responses to these long-standing theoretical challenges.
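The evaluation protocol described above (five trials per model, scores along several criteria, per-model comparison) can be sketched in a few lines. The model names and scores below are purely illustrative, and the aggregation scheme (mean over criteria, then over trials) is an assumption for the sketch, not the study's actual rubric.

```python
# Hedged sketch of trial/criterion score aggregation; all data is made up.
from statistics import mean

criteria = ["contextual_reasoning", "semantic_coherence", "information_filtering"]

# Hypothetical scores: model -> list of per-trial dicts (criterion -> 0..5).
trials = {
    "model_a": [{"contextual_reasoning": 4, "semantic_coherence": 5, "information_filtering": 4}] * 5,
    "model_b": [{"contextual_reasoning": 2, "semantic_coherence": 3, "information_filtering": 2}] * 5,
}

def model_score(trial_list):
    """Mean over trials of the mean over criteria for each trial."""
    return mean(mean(t[c] for c in criteria) for t in trial_list)

for name, ts in trials.items():
    print(name, round(model_score(ts), 2))
```

With real data, each trial dict would hold a rater's scores for one zero-shot response, and the outer mean would smooth over run-to-run variability.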
Logical foundations of Smart Contracts
Nowadays, sophisticated domains are emerging that require appropriate formalisms in order to be specified accurately and reasoned about. One such domain is that of smart contracts, which have emerged in cyber-physical systems as a way of enforcing formal agreements between components of these systems. Smart contracts self-execute to run and share business processes through blockchain, in decentralized systems, with many different participants. Legal contracts are in many cases complex documents, with a number of exceptions and many subcontracts. Implementing smart contracts based on legal contracts is a long and laborious task that needs to include all actions, procedures, and the effects of actions related to the execution of the contract. An ongoing open problem in this area is to account formally for smart contracts using a uniform and reasonably universal formalism. This thesis proposes logical foundations for smart contracts using the Situation Calculus, a logic for reasoning about actions. The Situation Calculus is one of the prominent logic-based artificial intelligence approaches, providing enough logical machinery to specify and implement dynamic and complex systems such as contracts, and it is well suited to showing how worlds change dynamically. Smart contracts are implemented with Golog (written in Prolog), a Situation Calculus-based programming language for modeling complex and dynamic behaviors.
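The core idea, that contract clauses become fluents whose truth values change as actions occur, can be sketched outside Golog. The following Python toy is a hypothetical illustration, not the thesis's formalism: situations are action histories, and each fluent is computed by replaying the history, a crude stand-in for Reiter-style successor state axioms.

```python
# Minimal situation-calculus-flavored sketch of a made-up payment contract.

S0 = ()  # the initial (empty) situation

def do(action, s):
    """Build the successor situation do(a, s) by appending the action."""
    return s + (action,)

def paid(s):
    """Fluent: payment stands iff a 'pay' occurred with no 'refund' after it."""
    result = False
    for a in s:
        if a == "pay":
            result = True
        elif a == "refund":
            result = False
    return result

def delivered(s):
    """Fluent: goods are delivered once a 'deliver' action occurs (and persist)."""
    return "deliver" in s

def contract_fulfilled(s):
    """The contract is fulfilled when payment stands and goods were delivered."""
    return paid(s) and delivered(s)

s = do("deliver", do("pay", S0))
print(contract_fulfilled(s))                 # True
print(contract_fulfilled(do("refund", s)))   # False: refund retracts payment
```

A Golog program would express the same thing declaratively, with precondition and successor state axioms instead of explicit history replay.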
Machines of Meaning
One goal of Artificial Intelligence is to learn meaningful representations for natural language expressions, but what this entails is not always clear. A variety of new linguistic behaviours present themselves, embodied as computers, enhanced humans, and collectives with various kinds of integration and communication. But to measure and understand the behaviours generated by such systems, we must clarify the language we use to talk about them. Computational models are often confused with the phenomena they try to model, and shallow metaphors are used as justifications for (or to hype) the success of computational techniques on many tasks related to natural language, thus implying progress toward human-level machine intelligence without ever clarifying what that means. This paper discusses the challenges in the specification of "machines of meaning", machines capable of acquiring meaningful semantics from natural language in order to achieve their goals. We characterize "meaning" in a computational setting, while highlighting the need for detachment from anthropocentrism in the study of the behaviour of machines of meaning. The pressing need to analyse AI risks and ethics requires a proper measurement of machine capabilities, which cannot be productively studied and explained while using ambiguous language. We propose a view of "meaning" to facilitate the discourse around approaches such as neural language models and to help broaden the research perspectives for technology that facilitates dialogue between humans and machines.
Separability, Contextuality, and the Quantum Frame Problem
Fields, Chris, Glazebrook, James F.
We study the relationship between assumptions of state separability and both preparation and measurement contextuality, and the relationship of both of these to the frame problem, the problem of predicting what does not change in consequence of an action. We state a quantum analog of the latter and prove its undecidability. We show how contextuality is generically induced in state preparation and measurement by basis choice, thermodynamic exchange, and the imposition of a priori causal models, and how fine-tuning assumptions appear ubiquitously in settings characterized as non-contextual.
Solving the Frame Problem: A Mathematical Investigation of the Common Sense Law of Inertia (Artificial Intelligence series), Murray Shanahan, MIT Press, ISBN 9780262193849
In 1969, John McCarthy and Pat Hayes uncovered a problem that has haunted the field of artificial intelligence ever since: the frame problem. The problem arises when logic is used to describe the effects of actions and events. Put simply, it is the problem of representing what remains unchanged as a result of an action or event. Many researchers in artificial intelligence believe that its solution is vital to the realization of the field's goals. Solving the Frame Problem presents the various approaches to the frame problem that have been proposed over the years.
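To make the "common sense law of inertia" concrete: a Reiter-style successor state axiom bundles an action's effects together with the default that a fluent persists unless some action changes it, avoiding one explicit frame axiom per action/fluent pair. The sketch below is an assumed toy example (a light switch), not one of the book's formalizations.

```python
# One successor state axiom for a single fluent, as a plain function.

def holds_light_on(action, was_on):
    """light_on holds after `action` iff the action turned it on,
    or it was already on and the action did not turn it off."""
    return action == "switch_on" or (was_on and action != "switch_off")

state = False
for a in ["switch_on", "open_door", "close_door", "switch_off"]:
    state = holds_light_on(a, state)
    # 'open_door'/'close_door' leave the light unchanged: inertia for free
print(state)  # False
```

The point of the encoding is that nothing had to be said about doors at all; unrelated actions fall through to the persistence disjunct automatically.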
Situation Calculus by Term Rewriting
A version of the situation calculus in which situations are represented as first-order terms is presented. Fluents can be computed from the term structure, and actions on the situations correspond to rewrite rules on the terms. Actions that only depend on or influence a subset of the fluents can, in some cases, be described as rewrite rules that operate on subterms. If actions are bidirectional, then efficient completion methods can be used to solve planning problems. This representation of situations and actions is most similar to the fluent calculus of Thielscher (1998), except that it is more flexible and makes more use of the subterm structure. Some examples are given, and a few general methods for constructing such sets of rewrite rules are presented. This paper was submitted to FSCD 2020 on December 23, 2019.
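The subterm idea can be illustrated with a deliberately simple sketch, assuming a made-up state term; this is not the paper's actual rewrite system. Each action rewrites only the subterm it mentions, so every other fluent persists without any frame axioms.

```python
# Situations as first-order terms (nested tuples); actions as rewrite rules
# that touch only one subterm. Term shape ("state", loc, holding) is invented.

def pickup(term):
    """Rewrite rule: grasp a block, touching only the 'holding' subterm."""
    kind, loc, holding = term
    if holding == "empty":
        return (kind, loc, "block")
    return term  # rule does not apply; term is unchanged

def move(term, dest):
    """Rewrite rule: relocate, touching only the 'loc' subterm."""
    kind, loc, holding = term
    return (kind, dest, holding)

s = ("state", "roomA", "empty")
s = pickup(s)
s = move(s, "roomB")
print(s)  # ('state', 'roomB', 'block')
```

Because `move` never inspects the `holding` position, what the agent carries survives the move by construction, which is the frame-problem payoff of the subterm encoding.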
Ray Reiter's Knowledge in Action
What Ray Reiter has done is take a set of ideas worked out by him and his collaborators over the last 11 years and recrystallize them into a sustained and consistent presentation. This is not a collection of those papers but a complete rewrite that avoids the usual repetition and notational inconsistency that one might expect. It makes one wish everyone as prolific as Reiter would copy his example--but because that's unlikely, we must be grateful for what he has given us. In case you haven't heard, Reiter and his crew, starting with the publication of Reiter (1991), breathed new life into the situation calculus (McCarthy and Hayes 1969), which had gotten the reputation of being of limited expressiveness. The basic concept of the calculus is, of course, the situation, which we can think of as a state of affairs, that is, a complete specification of the truth values of all propositions (in a suitable logical language), although that's closer to McCarthy and Hayes's traditional formulation than the analysis Reiter settles on (which I describe later).
Book Reviews
Review of The Mind Doesn't Work That Way: The Scope and Limits of Computational Psychology. If you are interested in writing a review, contact chandra@cis.ohio-state.edu. SAT question: Which one of the following doesn't belong with the rest? It is the only discipline in the list that is not under attack for being conceptually or methodologically confused. Objections to AI and computational cognitive science are myriad. Accordingly, there are many different reasons for these attacks. However, all of them come down to one simple observation: Humans seem a lot smarter than computers--not just smarter as in Einstein was smarter than I am, or I am smarter than a chimpanzee, but more like I am smarter than a pencil sharpener. To many, computation seems like the wrong paradigm for studying the mind. All this matters because of another truth: The computational paradigm is the best thing to come down the pike since the wheel. The Mind Doesn't Work That Way: The Scope and Limits of Computational Psychology, Jerry Fodor, Cambridge, Massachusetts, The MIT Press, 2000, 126 pages, $22.95. Jerry Fodor believes this latter claim. He says: "[The computational theory of mind] is, in my view, by far the best theory of cognition that we've got; indeed, the only one we've got that's worth the bother of a serious discussion. … There is, in short, every reason to suppose that Computational Theory is part of the truth about cognition." It is a fascinating read. This dispute about the quantity of truth is where the book gets its title. In 1997, Steven Pinker published an important book describing the current state of the art in cognitive science (see also Plotkin [1997]). Pinker's book is entitled How the Mind Works.
In it, he describes how computationalism, psychological nativism (the idea that many of our concepts are innate), massive modularity (the idea that most mental processes occur within a domain-specific, encapsulated, special-purpose processor), and Darwinian adaptationism combine to form a robust (but nascent) theory of mind. Fodor, however, thinks that the mind doesn't work that way or, anyhow, not very much of the mind works that way. Fodor dubs the synthesis of computationalism, nativism, massive modularity, and adaptationism the new synthesis (p.
Workshops
She argued that this ingrained conversational collaboration should be exploited to design successful natural language interfaces. Paul McKevitt, New Mexico State University, described a Wizard of Oz experiment in which it was found that particular sequences of speech act types have implications for the structure of the ensuing dialogue and can be correlated with certain aspects of the user, such as his experience in the domain. McKevitt contended that such empirical data, rather than subjective decision-making, should be the basis for constructing user models and argued for the development of automatic techniques for deriving the models. The second workshop was as successful as the first, with all agreeing that subsequent workshops should be held more frequently than at four-year intervals. Since the general trend has been for researchers in different areas of user modeling to operate in isolation, such workshops are particularly important as a means of increasing cooperation and cross-fertilization of ideas among the subdisciplines.